
    Lip syncing method for realistic expressive 3D face model

    Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social, and dramatic reality to computer games, films, and interactive multimedia, and is growing in use and importance. A high level of realism is demanded in applications such as computer games and cinema, yet authoring lip syncing with complex and subtle expressions remains difficult and fraught with problems of realism. This research proposes a lip-syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, together with a method to produce the correct lip shape at the correct time. The paper presents a 3D face model designed to support lip syncing aligned with an input audio file. The model deforms using a Raised Cosine Deformation (RCD) function grafted onto the input facial geometry, and is based on the MPEG-4 Facial Animation (FA) standard. The paper proposes a method to animate the 3D face model over time, creating lip-synced animation from a canonical set of visemes covering all pairwise combinations of a reduced phoneme set called ProPhone. The method also integrates emotions, drawing on the Ekman model and Plutchik's wheel of emotions, and adds emotive eye movements implemented with the Emotional Eye Movements Markup Language (EEMML) to produce a realistic 3D face model. © 2017 Springer Science+Business Media New York
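    The abstract does not give the RCD formulation, but a raised-cosine falloff for mesh deformation is straightforward to sketch. The Python snippet below is a minimal illustration, assuming a deformation that peaks at a control point (such as an MPEG-4 facial feature point) and decays smoothly to zero at an influence radius; all function names, coordinates, and parameter values are hypothetical, not the authors' implementation.

```python
import numpy as np

def raised_cosine_weight(d, radius):
    """Raised-cosine falloff: 1 at the control point, 0 at the radius edge."""
    x = np.clip(d / radius, 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * x))

def deform_mesh(vertices, centre, direction, amplitude, radius):
    """Displace vertices around `centre` with a raised-cosine profile.

    vertices  : (N, 3) array of mesh vertex positions
    centre    : (3,) control point, e.g. an MPEG-4 feature point on the lips
    direction : (3,) unit displacement direction
    amplitude : peak displacement at the control point
    radius    : influence radius; the weight falls smoothly to zero here
    """
    d = np.linalg.norm(vertices - centre, axis=1)
    w = raised_cosine_weight(d, radius)
    return vertices + amplitude * w[:, None] * direction

# Example: pull lower-lip vertices downward to open the mouth.
verts = np.random.rand(100, 3)                      # stand-in mesh
lip_fp = np.array([0.5, 0.2, 0.6])                  # hypothetical feature point
opened = deform_mesh(verts, lip_fp, np.array([0.0, -1.0, 0.0]), 0.05, 0.15)
```

    Because the raised cosine is C1-smooth at both the centre and the boundary, the deformed region blends into the surrounding geometry without visible creases, which is presumably why such a profile suits subtle lip shapes.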

    Facial expression animation through action units transfer in latent space

    Automatic animation synthesis has attracted much attention from the community. Because most existing methods handle a small number of discrete expressions rather than continuous ones, the integrity and realism of their facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important for automatic facial expression animation applications, has received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach based on action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred into the input image without labeled image pairs, without expression annotations, and without further network training. We also propose a new approach to quickly generate the input image's latent code and to cluster the boundaries of different AU attributes from their latent codes. Two latent-code operators, vector addition and continuous interpolation, are leveraged to simulate facial expression animation in alignment with these boundaries in the latent space. Experiments show that the proposed approach is effective for facial expression translation and animation synthesis.
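    As a rough illustration of the two latent-code operators named above, the sketch below applies vector addition along an assumed AU boundary direction and linearly interpolates between latent codes to produce animation frames. The latent dimensionality, the AU direction, and the strength value are invented for the example, and the GAN generator itself is elided.

```python
import numpy as np

def apply_au_offset(latent, au_direction, strength):
    """Vector addition: move a latent code along an AU boundary normal."""
    return latent + strength * au_direction

def interpolate(latent_a, latent_b, t):
    """Continuous interpolation between two latent codes (0 <= t <= 1)."""
    return (1.0 - t) * latent_a + t * latent_b

# Animate an expression by sweeping the interpolation parameter.
w_neutral = np.random.randn(512)        # stand-in latent code of the input face
n_au12 = np.random.randn(512)           # hypothetical "lip corner puller" direction
n_au12 /= np.linalg.norm(n_au12)

w_smile = apply_au_offset(w_neutral, n_au12, strength=3.0)
frames = [interpolate(w_neutral, w_smile, t) for t in np.linspace(0.0, 1.0, 30)]
# Each frame would then be decoded by the GAN generator: image = G(frame).
```

    Sweeping t continuously is what yields smooth in-between expressions, in contrast to methods limited to a discrete expression set.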

    Oxygenation absorption and light scattering driven facial animation of natural virtual human

    The color of skin is one of the key indicators of bodily change that affects facial expressions. Skin color is largely determined by the interaction of light with the chromophores inside the skin and by hemoglobin oxygenation in the blood. We extend the approach of Donner and Jensen to develop a realistic, textured 3D facial animation model, considering both the surface and subsurface interaction of light with the skin. Six model parameters control the amounts of oxygenation, de-oxygenation, hemoglobin, melanin, and oil, together with a blend factor between different types of melanin in the skin, to create a close match. Pulse oximetry and a 3D skin analyzer are used to determine the correlation between blood oxygenation and basic natural emotional expressions. The multipole method for layered materials is applied to compute the spectral diffusion profiles of two-layered skin, simulating subsurface scattering, while the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) simulates the interaction of light with the oily surface stratum of the skin. Unity3D shader programming is used to implement advanced real-time rendering. Five basic natural facial emotive appearances (angry, happy, neutral, sad, and fear) are simulated for Asian, European, and Middle Eastern males and females. We suggest that our tailored approach may be helpful for the development of virtual reality and serious-games applications.
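    For reference, one standard formulation of the Torrance-Sparrow specular term (microfacet distribution × Fresnel × geometric attenuation, normalized by the incident and outgoing cosines) can be sketched as below. The Beckmann distribution, the Schlick Fresnel approximation, and all parameter values are assumptions for illustration; the paper's actual term lives in Unity3D shader code and may differ in these choices.

```python
import numpy as np

def torrance_sparrow_brdf(n, l, v, roughness, f0):
    """Evaluate a Torrance-Sparrow style specular BRDF for one light.

    n, l, v   : unit surface normal, light, and view vectors
    roughness : Beckmann roughness parameter m
    f0        : reflectance at normal incidence (Schlick Fresnel)
    """
    h = l + v
    h = h / np.linalg.norm(h)               # half vector
    n_l = max(np.dot(n, l), 1e-6)
    n_v = max(np.dot(n, v), 1e-6)
    n_h = max(np.dot(n, h), 1e-6)
    v_h = max(np.dot(v, h), 1e-6)

    # Beckmann microfacet distribution
    m2 = roughness * roughness
    d = np.exp((n_h * n_h - 1.0) / (m2 * n_h * n_h)) / (np.pi * m2 * n_h ** 4)

    # Schlick approximation to the Fresnel term
    f = f0 + (1.0 - f0) * (1.0 - v_h) ** 5

    # Geometric attenuation (masking / shadowing)
    g = min(1.0, 2.0 * n_h * n_v / v_h, 2.0 * n_h * n_l / v_h)

    return d * f * g / (4.0 * n_l * n_v)

# Example: glancing light on an oily patch (f0 ~ 0.04 is a common skin value).
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.6, 0.8]);  l = l / np.linalg.norm(l)
v = np.array([0.0, -0.3, 0.95]); v = v / np.linalg.norm(v)
print(torrance_sparrow_brdf(n, l, v, roughness=0.35, f0=0.04))
```

    The microfacet term captures the sharp oily highlight on the skin surface, complementing the soft appearance produced by the multipole subsurface-scattering profiles.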

    Narrative Experiences of History and Complex Systems

    The chapter considers the elements at play in the establishment of our current historical knowledge. Treating past events as complex adaptive systems (CAS), it demonstrates why the current mediation of history is oversimplified. By formulating the possibility of a complex narrative matrix (environment), it explores that matrix's potential to offer an archive of evidence drawn from multiple agents while presenting the evolving relationships between them over time. The matrix aligns itself with a simulation of a CAS, the primary interest being the VR matrix's ability to serve both as an interactive interface enabling exploration of the evidential material from different points of access and as a construction able to reveal its own procedural workings; a dynamic that elicits the creation of meaning by including the reasoning behind the chosen archival material, the product of the process, and the process itself.